Neil Cohn (born 1980) is an American cognitive scientist and comics theorist. His research focuses on the cognition of comics comprehension, using an interdisciplinary approach that combines theoretical and corpus linguistics with cognitive psychology and cognitive neuroscience (Zimmer, Carl. 2012. "The Charlie Brown Effect: A comic book artist turned neuroscientist says the images in Peanuts tap the same brain processes as sentences." Discover Magazine, pp. 68–70; Robson, David. 2013. "How the visual language of comics could have its roots in the ice age." The Guardian, November 23, 2013).
Cohn's work argues that common cognitive capacities underlie the processing of various expressive domains, especially verbal and signed languages and what he calls "visual language": the structure and cognition of drawings and visual narratives, particularly those found in comics. His 2020 book, Who Understands Comics? (Cohn, Neil. 2020. Who Understands Comics?: Questioning the Universality of Visual Language Comprehension. London: Bloomsbury), explored the proficiency required to understand visual narratives and was nominated for a 2021 Eisner Award for Best Academic/Scholarly Work. His theories on visual language provided the foundation for automatically generated news comics at the BBC ("Graphical Storytelling: Reaching new audiences with short comics about important health stories." BBC News website).
Cohn's research has also examined the comprehension and linguistic status of emoji (Cohn, Neil. 2015. "Will emoji become a new language?" BBC Future, October 12, 2015; Gilmore, Garrett. 2015. "Help! I can't stop thinking in emoji!" VICE, April 21, 2015; Barrett, Brian. 2016. "Facebook Messenger finally bridges the great emoji divide." Wired, June 16, 2016). He has also helped propose and design several emoji (Kambhampaty, Anna P. 2021. "The Melting Face Emoji Has Already Won Us Over." The New York Times, September 29, 2021).
Cohn's primary research program, visual language theory, holds that a narrative structure operates as a "grammar" for sequential images, analogous to syntax in sentences. Although narrative grammar operates over discourse-level information, its function and structure are similar to syntax in that it organizes categorical roles into hierarchic constituents in order to express meaning. Cohn's work in cognitive neuroscience has suggested that manipulations of this narrative grammar elicit brain responses similar to those elicited by manipulations of syntax in language (i.e., N400, P600, and left anterior negativity effects) (Cohn, Neil. 2013. "Visual narrative structure." Cognitive Science 37(3): 413–452; Cohn, Neil, Martin Paczynski, Ray Jackendoff, Phillip Holcomb, and Gina Kuperberg. 2012. "(Pea)nuts and bolts of visual narratives: Structure and meaning in sequential image comprehension." Cognitive Psychology 65(1): 1–38; Cohn, Neil, Ray Jackendoff, Phillip Holcomb, and Gina Kuperberg. 2014. "The grammar of visual narratives: Neural evidence for constituent structure in visual narrative comprehension." Neuropsychologia 64: 63–70).
In 2020, Cohn was awarded a Starting Grant from the European Research Council to study cross-cultural diversity in the structures of the visual languages used in comics around the world by building a multicultural corpus of annotated comics, and to examine the relationship of those structures to those of spoken languages (Tilburg University press release).